Weakly Supervised Audio Source Separation via Spectrum Energy Preserved Wasserstein Learning
Separating audio mixtures into individual instrument tracks has been a long-standing challenge. We introduce a novel weakly supervised audio source separation approach based on deep adversarial learning. Specifically, our loss function adopts the Wasserstein distance, which directly measures the distribution distance between the separated sources and the real sources for each individual source. Moreover, a global regularization term is added to fulfill the spectrum energy preservation property regardless of separation. Unlike state-of-the-art weakly supervised models, which often involve deliberately devised constraints or careful model selection, our approach needs little prior model specification on the data and can be learned straightforwardly in an end-to-end fashion. We show that the proposed method performs competitively on public benchmarks against state-of-the-art weakly supervised methods.
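The abstract above leans on the Wasserstein distance between separated and real source distributions. As a minimal sketch (not the paper's adversarial implementation), the one-dimensional Wasserstein-1 distance between two equally sized empirical samples reduces to the mean absolute difference of their sorted values:

```python
# Hedged sketch, not the paper's model: 1-D Wasserstein-1 distance
# between two empirical distributions with equal sample counts reduces
# to the mean absolute difference of their sorted samples.
def wasserstein_1d(xs, ys):
    """W1 distance between two equally sized empirical samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Identical samples have zero distance ...
print(wasserstein_1d([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0
# ... while shifting every sample by a constant c gives distance c.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

In the paper this distance is estimated adversarially over high-dimensional spectra rather than computed in closed form, but the closed-form 1-D case shows why the metric varies smoothly with how far one distribution must be "moved" to match the other.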
Timed Automata Approach for Motion Planning Using Metric Interval Temporal Logic
In this paper, we consider the robot motion (or task) planning problem under given time-bounded high-level specifications. We use metric interval temporal logic (MITL), a member of the temporal logic family, to represent the task specification. We then provide a constructive way to generate a timed automaton, together with methods to search for accepting runs on the automaton, yielding a feasible motion (or path) sequence for the robot to complete the task.
Comment: Full Version for ECC 201
Plato and Trumpism: A Look at Trumpism via the Lens of Plato's Republic
This paper offers Plato's Republic as a lens for understanding the Trump movement in today's America. On this basis, the paper is divided into three parts. The first presents a political model to explain the underlying force behind the degeneration of political regimes in Plato's theory. The second applies that model to derive the political features of populist movements and the psychological traits of the tyrant. The last part applies these conclusions to offer a way to interpret the formation of Trumpism and a possible solution to it.
Metabolic analysis of single cell gene expression data: What can we learn?
The advent of single cell profiling technologies has brought unprecedented resolution to cell heterogeneity in key human tissues. A significant remaining challenge is to interpret this cell variation in terms of meaningful functional differences and interactions between cell types. In this study, we perform a metabolic reconstruction-based assessment of transcriptional heterogeneity in the human brain and kidney. We focus specifically on transporters, as their expression is associated with both metabolic interactions between cell types and uptake of drugs. We find that: 1) we can identify drug-transporter relationships through structural homology between drugs and native transporter substrates; 2) we observe concomitant brain-region-specific expression differences in key transporters that may affect therapeutic outcomes; 3) upstream metabolic genes have expression that correlates with transporter expression, suggesting that transporter differences are likely physiologically significant; 4) kidney data show differential expression across regions and cell types in the human kidney; and 5) metabolic reconstructions provide useful interpretation to help understand the significance of single cell gene expression differences. Native metabolic activity of transporters is postulated from the expression of metabolic genes whose activities are correlated, as determined by metabolic flux modeling. This work illustrates the types of higher order interactions that can be elucidated from single cell profiling data and paves the way to clinical interventions targeting particular cell types and cell interactions.
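Finding 3 above rests on correlating transporter expression with upstream metabolic gene expression across cells. As a minimal hedged sketch with made-up toy data (the gene roles are illustrative, not taken from the study), the core computation is a Pearson correlation over per-cell expression values:

```python
# Hedged sketch with toy data, not the study's pipeline: Pearson
# correlation between a transporter and an upstream metabolic gene,
# measured across single cells.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-cell expression of a transporter and an upstream enzyme.
transporter = [2.0, 4.1, 6.0, 8.2, 10.1]
upstream    = [1.1, 2.0, 3.2, 4.0, 5.1]

r = pearson(transporter, upstream)
print(r)  # close to 1 for this near-linear toy data
```

A high correlation like this is what motivates the abstract's inference that a transporter's expression differences reflect genuine upstream metabolic activity rather than noise.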
IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization
Fine-tuning pre-trained language models (PTLMs), such as BERT and its better variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks. Recent advances in representation learning show that isotropic (i.e., unit-variance and uncorrelated) embeddings can significantly improve performance on downstream tasks, with faster convergence and better generalization. The isotropy of the pre-trained embeddings in PTLMs, however, is relatively under-explored. In this paper, we analyze the isotropy of the pre-trained [CLS] embeddings of PTLMs with straightforward visualization, and point out two major issues: high variance in their standard deviation, and high correlation between different dimensions. We also propose a new network regularization method, isotropic batch normalization (IsoBN), to address these issues, moving towards learning more isotropic representations in fine-tuning by dynamically penalizing dominating principal components. This simple yet effective fine-tuning method yields an average absolute improvement of about 1.0 point across seven NLU tasks.
Comment: AAAI 202
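The two symptoms named in the abstract, uneven per-dimension standard deviations and correlated dimensions, are easy to measure on a batch of embeddings. As a hedged sketch (not the authors' IsoBN code, and the toy embeddings are invented), per-dimension standardization, the batch-norm step, removes the first symptom:

```python
# Hedged sketch, not the authors' implementation: measure per-dimension
# standard deviations of toy "[CLS]" embeddings, then standardize so
# every dimension has unit variance.
import math

batch = [  # 4 toy samples, 3 dims, with one variance-dominating dim
    [10.0, 0.1, 1.0],
    [-10.0, -0.1, 1.2],
    [20.0, 0.2, 0.8],
    [-20.0, -0.2, 1.0],
]

def per_dim_std(rows):
    """Population standard deviation of each embedding dimension."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    return [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / n)
            for j in range(d)]

def standardize(rows):
    """Shift and scale every dimension to zero mean, unit variance."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    stds = per_dim_std(rows)
    return [[(r[j] - means[j]) / stds[j] for j in range(d)] for r in rows]

print(per_dim_std(batch))               # wildly uneven stds before
print(per_dim_std(standardize(batch)))  # all 1.0 after
```

IsoBN itself goes further than this per-dimension step: it also targets the second symptom by penalizing dominating principal components (i.e., decorrelating dimensions), which plain standardization leaves untouched.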